🐛 skip drain node if DrainingSucceeded condition is already true #11050

Closed
wants to merge 1 commit

Conversation


@liuxu623 liuxu623 commented Aug 13, 2024

What this PR does / why we need it:

If the DrainingSucceeded condition is already true, we shouldn't drain the node again.
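Concretely, the change is a one-line guard in reconcileDelete (the full diff appears further down in this thread). A minimal runnable sketch of the proposed logic, with a hypothetical shouldDrain helper standing in for the controller code:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/cluster-api/util/conditions"
)

// shouldDrain is a hypothetical helper mirroring the guard this PR adds:
// drain only if draining is allowed for this Machine and has not already
// succeeded once.
func shouldDrain(m *clusterv1.Machine, drainAllowed bool) bool {
	return drainAllowed && !conditions.IsTrue(m, clusterv1.DrainingSucceededCondition)
}

func main() {
	m := &clusterv1.Machine{ObjectMeta: metav1.ObjectMeta{Name: "example"}}
	fmt.Println(shouldDrain(m, true)) // true: DrainingSucceeded not set yet

	// The machine controller sets this condition after a successful drain.
	conditions.MarkTrue(m, clusterv1.DrainingSucceededCondition)
	fmt.Println(shouldDrain(m, true)) // false: drain already succeeded, skip it
}
```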

@k8s-ci-robot added labels cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA.) and do-not-merge/needs-area (PR is missing an area label) on Aug 13, 2024
@k8s-ci-robot
Contributor

Welcome @liuxu623!

It looks like this is your first PR to kubernetes-sigs/cluster-api 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/cluster-api has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

Hi @liuxu623. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign killianmuldoon for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added labels needs-ok-to-test (Indicates a PR that requires an org member to verify it is safe to test.) and size/XS (Denotes a PR that changes 0-9 lines, ignoring generated files.) on Aug 13, 2024
@liuxu623
Author

/area machine

@k8s-ci-robot added the label area/machine (Issues or PRs related to machine lifecycle management) and removed the label do-not-merge/needs-area (PR is missing an area label) on Aug 13, 2024
@liuxu623 changed the title from "🐛 skip draion node if DrainingSucceeded condition is already true" to "🐛 skip drain node if DrainingSucceeded condition is already true" on Aug 13, 2024
Member

@chrischdi chrischdi left a comment


/ok-to-test

@k8s-ci-robot added the label ok-to-test (Indicates a non-member PR verified by an org member that is safe to test.) and removed the label needs-ok-to-test (Indicates a PR that requires an org member to verify it is safe to test.) on Aug 15, 2024
@sbueringer
Member

sbueringer commented Aug 20, 2024

I'm not sure if we should skip draining just because it succeeded in the past.
Also not sure if this will be a separate condition going forward (see: #10897).

What is your use case for this?

@liuxu623
Author

> I'm not sure if we should skip draining just because it succeeded in the past. Also not sure if this will be a separate condition going forward (see: #10897).
>
> What is your use case for this?

@sbueringer In our environment, when I scale down a MachineDeployment, I found it gets stuck draining the node:

  1. scale down MachineDeployment
  2. delete Machine
  3. drain node succeeds
  4. delete InfraMachine
  5. delete instance (like an EC2 instance)
  6. drain node again, which fails because the instance has been deleted

@sbueringer
Member

If the node doesn't exist anymore, "drain node" should just skip over the drain.

@sbueringer
Member

Can you provide some logs so we can figure out why drainNode is not skipped?

Context:

```go
if apierrors.IsNotFound(err) {
	// If an admin deletes the node directly, we'll end up here.
	log.Error(err, "Could not find node from noderef, it may have already been deleted")
	return ctrl.Result{}, nil
}
```
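As an aside, apierrors.IsNotFound matches the NotFound error the API server returns once the Node object is gone. A minimal, self-contained illustration (resource and node name are chosen for the example):

```go
package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// Build the kind of error a Get on a deleted Node returns; the drain
	// code above treats this as "node is gone, nothing left to drain".
	err := apierrors.NewNotFound(schema.GroupResource{Resource: "nodes"}, "10.112.54.33")
	fmt.Println(apierrors.IsNotFound(err)) // true
}
```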

@liuxu623
Author

After the InfraMachine is deleted, the Machine controller will return at https://github.com/kubernetes-sigs/cluster-api/blob/v1.8.1/internal/controllers/machine/machine_controller.go#L452

@sbueringer
Member

And at which point is it failing?

@liuxu623
Author

> And at which point is it failing?

@sbueringer When we delete a machine:

first Reconcile

after the InfraMachine is deleted,
second Reconcile:

  • drain node failed

@sbueringer
Member

It's entirely unclear to me how "drain node failed" happens.

We have this case where we just skip the drain if the node doesn't exist:

```go
if apierrors.IsNotFound(err) {
	// If an admin deletes the node directly, we'll end up here.
	log.Error(err, "Could not find node from noderef, it may have already been deleted")
	return ctrl.Result{}, nil
}
```

@liuxu623
Author

> It's entirely unclear to me how "drain node failed" happens.
>
> We have this case where we just skip the drain if the node doesn't exist:
>
> ```go
> if apierrors.IsNotFound(err) {
> 	// If an admin deletes the node directly, we'll end up here.
> 	log.Error(err, "Could not find node from noderef, it may have already been deleted")
> 	return ctrl.Result{}, nil
> }
> ```

Why do you think the node doesn't exist? The Machine controller hasn't deleted the node...

@sbueringer
Member

I think I was misreading one of your comments. This would be easier if you could provide logs.

@liuxu623
Author

I0821 11:45:06.085980       1 recorder.go:104] "events: Scaled MachineSet rke-73e42456/default-lbmsh: 2 -> 1" type="Normal" object={"kind":"MachineDeployment","namespace":"rke-73e42456","name":"default","uid":"48b3b93f-1589-43b5-bd94-dd62e061bc59","apiVersion":"cluster.x-k8s.io/v1beta1","resourceVersion":"165157570"} reason="SuccessfulScale"
I0821 11:45:06.086488       1 machineset_controller.go:552] "MachineSet is scaling down to 1 replicas by deleting 1 machines" controller="machineset" controllerGroup="cluster.x-k8s.io" controllerKind="MachineSet" MachineSet="rke-73e42456/default-lbmsh" namespace="rke-73e42456" name="default-lbmsh" reconcileID="75a44998-2ae5-4d3e-aca7-4909a0fa9d86" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" replicas=1 machineCount=2 deletePolicy="Random"
I0821 11:45:06.086510       1 machineset_controller.go:564] "Deleting machine 1 of 1" controller="machineset" controllerGroup="cluster.x-k8s.io" controllerKind="MachineSet" MachineSet="rke-73e42456/default-lbmsh" namespace="rke-73e42456" name="default-lbmsh" reconcileID="75a44998-2ae5-4d3e-aca7-4909a0fa9d86" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" Machine="rke-73e42456/default-lbmsh-7s754"
I0821 11:45:06.089761       1 recorder.go:104] "events: Deleted machine \"default-lbmsh-7s754\"" type="Normal" object={"kind":"MachineSet","namespace":"rke-73e42456","name":"default-lbmsh","uid":"ada87fbd-e054-4d42-bf9b-3b3b9c8052dd","apiVersion":"cluster.x-k8s.io/v1beta1","resourceVersion":"165157572"} reason="SuccessfulDelete"
I0821 11:45:06.092071       1 machine_controller.go:362] "Draining node" controller="machine" controllerGroup="cluster.x-k8s.io" controllerKind="Machine" Machine="rke-73e42456/default-lbmsh-7s754" namespace="rke-73e42456" name="default-lbmsh-7s754" reconcileID="5d33d756-f8e2-4fbf-9313-c2f2dfc77a8a" MachineSet="rke-73e42456/default-lbmsh" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" Node="10.112.54.33"
E0821 11:45:06.121478       1 machine_controller.go:652] "WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-m82v6, kube-system/kube-proxy-5jwm6, kube-system/node-local-dns-6vxlw\n" controller="machine" controllerGroup="cluster.x-k8s.io" controllerKind="Machine" Machine="rke-73e42456/default-lbmsh-7s754" namespace="rke-73e42456" name="default-lbmsh-7s754" reconcileID="5d33d756-f8e2-4fbf-9313-c2f2dfc77a8a" MachineSet="rke-73e42456/default-lbmsh" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" Node="10.112.54.33"
I0821 11:45:06.122644       1 machine_controller.go:910] "evicting pod monitoring/kube-state-metrics-r2-0\n" controller="machine" controllerGroup="cluster.x-k8s.io" controllerKind="Machine" Machine="rke-73e42456/default-lbmsh-7s754" namespace="rke-73e42456" name="default-lbmsh-7s754" reconcileID="5d33d756-f8e2-4fbf-9313-c2f2dfc77a8a" MachineSet="rke-73e42456/default-lbmsh" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" Node="10.112.54.33"
I0821 11:45:37.338846       1 machine_controller.go:673] "Drain successful" controller="machine" controllerGroup="cluster.x-k8s.io" controllerKind="Machine" Machine="rke-73e42456/default-lbmsh-7s754" namespace="rke-73e42456" name="default-lbmsh-7s754" reconcileID="e840ff97-d31b-4312-a4ba-8e950bcb9ac3" MachineSet="rke-73e42456/default-lbmsh" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" Node="10.112.54.33"
I0821 11:45:37.338958       1 recorder.go:104] "events: success draining Machine's node \"10.112.54.33\"" type="Normal" object={"kind":"Machine","namespace":"rke-73e42456","name":"default-lbmsh-7s754","uid":"f9aff45c-33fb-4dd6-aad1-b9478ca23d36","apiVersion":"cluster.x-k8s.io/v1beta1","resourceVersion":"165157855"} reason="SuccessfulDrainNode"
I0821 11:45:37.338975       1 recorder.go:104] "events: success waiting for node volumes detaching Machine's node \"10.112.54.33\"" type="Normal" object={"kind":"Machine","namespace":"rke-73e42456","name":"default-lbmsh-7s754","uid":"f9aff45c-33fb-4dd6-aad1-b9478ca23d36","apiVersion":"cluster.x-k8s.io/v1beta1","resourceVersion":"165157855"} reason="NodeVolumesDetached"
I0821 11:45:37.352469       1 machine_controller.go:435] "Waiting for infrastructure to be deleted" controller="machine" controllerGroup="cluster.x-k8s.io" controllerKind="Machine" Machine="rke-73e42456/default-lbmsh-7s754" namespace="rke-73e42456" name="default-lbmsh-7s754" reconcileID="e840ff97-d31b-4312-a4ba-8e950bcb9ac3" MachineSet="rke-73e42456/default-lbmsh" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" RedMachine="rke-73e42456/default-pp48j"
I0821 11:45:37.358408       1 machineset_controller.go:552] "MachineSet is scaling down to 1 replicas by deleting 1 machines" controller="machineset" controllerGroup="cluster.x-k8s.io" controllerKind="MachineSet" MachineSet="rke-73e42456/default-lbmsh" namespace="rke-73e42456" name="default-lbmsh" reconcileID="d6d36844-833d-4a7e-8011-4aa04dc52e98" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" replicas=1 machineCount=2 deletePolicy="Random"
I0821 11:45:37.358435       1 machineset_controller.go:573] "Waiting for machine 1 of 1 to be deleted" controller="machineset" controllerGroup="cluster.x-k8s.io" controllerKind="MachineSet" MachineSet="rke-73e42456/default-lbmsh" namespace="rke-73e42456" name="default-lbmsh" reconcileID="d6d36844-833d-4a7e-8011-4aa04dc52e98" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" Machine="rke-73e42456/default-lbmsh-7s754"
I0821 11:45:37.363885       1 machine_controller.go:362] "Draining node" controller="machine" controllerGroup="cluster.x-k8s.io" controllerKind="Machine" Machine="rke-73e42456/default-lbmsh-7s754" namespace="rke-73e42456" name="default-lbmsh-7s754" reconcileID="4ba63488-56f9-4a60-86cc-e8e286f8bc9e" MachineSet="rke-73e42456/default-lbmsh" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" Node="10.112.54.33"
E0821 11:45:37.376433       1 machine_controller.go:652] "WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-m82v6, kube-system/kube-proxy-5jwm6, kube-system/node-local-dns-6vxlw\n" controller="machine" controllerGroup="cluster.x-k8s.io" controllerKind="Machine" Machine="rke-73e42456/default-lbmsh-7s754" namespace="rke-73e42456" name="default-lbmsh-7s754" reconcileID="4ba63488-56f9-4a60-86cc-e8e286f8bc9e" MachineSet="rke-73e42456/default-lbmsh" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" Node="10.112.54.33"
I0821 11:45:37.377534       1 machine_controller.go:910] "evicting pod monitoring/kube-state-metrics-r2-0\n" controller="machine" controllerGroup="cluster.x-k8s.io" controllerKind="Machine" Machine="rke-73e42456/default-lbmsh-7s754" namespace="rke-73e42456" name="default-lbmsh-7s754" reconcileID="4ba63488-56f9-4a60-86cc-e8e286f8bc9e" MachineSet="rke-73e42456/default-lbmsh" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" Node="10.112.54.33"
I0821 11:45:39.387568       1 machine_controller.go:647] "Evicted pod from Node" controller="machine" controllerGroup="cluster.x-k8s.io" controllerKind="Machine" Machine="rke-73e42456/default-lbmsh-7s754" namespace="rke-73e42456" name="default-lbmsh-7s754" reconcileID="4ba63488-56f9-4a60-86cc-e8e286f8bc9e" MachineSet="rke-73e42456/default-lbmsh" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" Node="10.112.54.33" Pod="monitoring/kube-state-metrics-r2-0"
I0821 11:45:45.388567       1 machine_controller.go:673] "Drain successful" controller="machine" controllerGroup="cluster.x-k8s.io" controllerKind="Machine" Machine="rke-73e42456/default-lbmsh-7s754" namespace="rke-73e42456" name="default-lbmsh-7s754" reconcileID="4ba63488-56f9-4a60-86cc-e8e286f8bc9e" MachineSet="rke-73e42456/default-lbmsh" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" Node="10.112.54.33"
I0821 11:45:45.388699       1 recorder.go:104] "events: success draining Machine's node \"10.112.54.33\"" type="Normal" object={"kind":"Machine","namespace":"rke-73e42456","name":"default-lbmsh-7s754","uid":"f9aff45c-33fb-4dd6-aad1-b9478ca23d36","apiVersion":"cluster.x-k8s.io/v1beta1","resourceVersion":"165158008"} reason="SuccessfulDrainNode"
I0821 11:45:45.388715       1 recorder.go:104] "events: success waiting for node volumes detaching Machine's node \"10.112.54.33\"" type="Normal" object={"kind":"Machine","namespace":"rke-73e42456","name":"default-lbmsh-7s754","uid":"f9aff45c-33fb-4dd6-aad1-b9478ca23d36","apiVersion":"cluster.x-k8s.io/v1beta1","resourceVersion":"165158008"} reason="NodeVolumesDetached"
I0821 11:45:45.392537       1 machine_controller.go:435] "Waiting for infrastructure to be deleted" controller="machine" controllerGroup="cluster.x-k8s.io" controllerKind="Machine" Machine="rke-73e42456/default-lbmsh-7s754" namespace="rke-73e42456" name="default-lbmsh-7s754" reconcileID="4ba63488-56f9-4a60-86cc-e8e286f8bc9e" MachineSet="rke-73e42456/default-lbmsh" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" RedMachine="rke-73e42456/default-pp48j"
I0821 11:45:45.398283       1 machineset_controller.go:552] "MachineSet is scaling down to 1 replicas by deleting 1 machines" controller="machineset" controllerGroup="cluster.x-k8s.io" controllerKind="MachineSet" MachineSet="rke-73e42456/default-lbmsh" namespace="rke-73e42456" name="default-lbmsh" reconcileID="04b06162-660d-404b-82ec-587ff2882392" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" replicas=1 machineCount=2 deletePolicy="Random"
I0821 11:45:45.398304       1 machineset_controller.go:573] "Waiting for machine 1 of 1 to be deleted" controller="machineset" controllerGroup="cluster.x-k8s.io" controllerKind="MachineSet" MachineSet="rke-73e42456/default-lbmsh" namespace="rke-73e42456" name="default-lbmsh" reconcileID="04b06162-660d-404b-82ec-587ff2882392" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" Machine="rke-73e42456/default-lbmsh-7s754"
I0821 11:45:45.404130       1 machine_controller.go:362] "Draining node" controller="machine" controllerGroup="cluster.x-k8s.io" controllerKind="Machine" Machine="rke-73e42456/default-lbmsh-7s754" namespace="rke-73e42456" name="default-lbmsh-7s754" reconcileID="52a2a7fb-7aaf-462e-8e30-519528827015" MachineSet="rke-73e42456/default-lbmsh" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" Node="10.112.54.33"
I0821 11:45:45.407307       1 machineset_controller.go:552] "MachineSet is scaling down to 1 replicas by deleting 1 machines" controller="machineset" controllerGroup="cluster.x-k8s.io" controllerKind="MachineSet" MachineSet="rke-73e42456/default-lbmsh" namespace="rke-73e42456" name="default-lbmsh" reconcileID="f8bdd339-b7b3-4669-8f06-4d4a505b085b" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" replicas=1 machineCount=2 deletePolicy="Random"
I0821 11:45:45.407327       1 machineset_controller.go:573] "Waiting for machine 1 of 1 to be deleted" controller="machineset" controllerGroup="cluster.x-k8s.io" controllerKind="MachineSet" MachineSet="rke-73e42456/default-lbmsh" namespace="rke-73e42456" name="default-lbmsh" reconcileID="f8bdd339-b7b3-4669-8f06-4d4a505b085b" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" Machine="rke-73e42456/default-lbmsh-7s754"
E0821 11:45:45.418871       1 machine_controller.go:652] "WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-m82v6, kube-system/kube-proxy-5jwm6, kube-system/node-local-dns-6vxlw\n" controller="machine" controllerGroup="cluster.x-k8s.io" controllerKind="Machine" Machine="rke-73e42456/default-lbmsh-7s754" namespace="rke-73e42456" name="default-lbmsh-7s754" reconcileID="52a2a7fb-7aaf-462e-8e30-519528827015" MachineSet="rke-73e42456/default-lbmsh" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" Node="10.112.54.33"
I0821 11:45:45.420084       1 machine_controller.go:910] "evicting pod monitoring/kube-state-metrics-r2-0\n" controller="machine" controllerGroup="cluster.x-k8s.io" controllerKind="Machine" Machine="rke-73e42456/default-lbmsh-7s754" namespace="rke-73e42456" name="default-lbmsh-7s754" reconcileID="52a2a7fb-7aaf-462e-8e30-519528827015" MachineSet="rke-73e42456/default-lbmsh" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" Node="10.112.54.33"
I0821 11:45:45.420099       1 machine_controller.go:910] "evicting pod devops/ilogtail-service-default-qcrgb\n" controller="machine" controllerGroup="cluster.x-k8s.io" controllerKind="Machine" Machine="rke-73e42456/default-lbmsh-7s754" namespace="rke-73e42456" name="default-lbmsh-7s754" reconcileID="52a2a7fb-7aaf-462e-8e30-519528827015" MachineSet="rke-73e42456/default-lbmsh" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" Node="10.112.54.33"
I0821 11:45:51.205711       1 machineset_controller.go:552] "MachineSet is scaling down to 1 replicas by deleting 1 machines" controller="machineset" controllerGroup="cluster.x-k8s.io" controllerKind="MachineSet" MachineSet="rke-73e42456/default-lbmsh" namespace="rke-73e42456" name="default-lbmsh" reconcileID="d9f5e2bd-e071-4a6c-a682-e94b05403a4e" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" replicas=1 machineCount=2 deletePolicy="Random"
I0821 11:45:51.205734       1 machineset_controller.go:573] "Waiting for machine 1 of 1 to be deleted" controller="machineset" controllerGroup="cluster.x-k8s.io" controllerKind="MachineSet" MachineSet="rke-73e42456/default-lbmsh" namespace="rke-73e42456" name="default-lbmsh" reconcileID="d9f5e2bd-e071-4a6c-a682-e94b05403a4e" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" Machine="rke-73e42456/default-lbmsh-7s754"
E0821 11:46:05.431457       1 machine_controller.go:669] "Drain failed, retry in 20s" err="[error when waiting for pod \"ilogtail-service-default-qcrgb\" in namespace \"devops\" to terminate: global timeout reached: 20s, error when waiting for pod \"kube-state-metrics-r2-0\" in namespace \"monitoring\" to terminate: global timeout reached: 20s]" controller="machine" controllerGroup="cluster.x-k8s.io" controllerKind="Machine" Machine="rke-73e42456/default-lbmsh-7s754" namespace="rke-73e42456" name="default-lbmsh-7s754" reconcileID="52a2a7fb-7aaf-462e-8e30-519528827015" MachineSet="rke-73e42456/default-lbmsh" MachineDeployment="rke-73e42456/default" Cluster="rke-73e42456/rke-73e42456" Node="10.112.54.33"

@liuxu623
Author

@sbueringer Sorry, I tried to reproduce the problem and checked the logs; I found the root cause is that some workloads' tolerations are:

```yaml
tolerations:
- operator: "Exists"
  effect: "NoSchedule"
```

Pods with this toleration get recreated on the node after the drain succeeds.
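This explains the re-drain: cordoning marks the Node unschedulable, which Kubernetes surfaces as the node.kubernetes.io/unschedulable:NoSchedule taint, and a blanket Exists toleration for NoSchedule tolerates exactly that taint, so such Pods can be rescheduled onto the drained node. A minimal sketch of that interaction, using the upstream ToleratesTaint helper (toleration values taken from the snippet above):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Taint that cordoning places on the Node ("node.kubernetes.io/unschedulable").
	cordonTaint := corev1.Taint{
		Key:    corev1.TaintNodeUnschedulable,
		Effect: corev1.TaintEffectNoSchedule,
	}

	// The workload toleration quoted above.
	tol := corev1.Toleration{
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}

	// true: such Pods can land back on the cordoned node, re-populating it
	// right after a drain reports success.
	fmt.Println(tol.ToleratesTaint(&cordonTaint))
}
```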

@sbueringer
Member

sbueringer commented Aug 21, 2024

Wondering if this can be solved with a similar change to this one: #11024 (comment)

(Heads up: I'm currently refactoring this whole drain behavior here: #11074 Should be done in a few days)

@sbueringer
Member

I would recommend joining the discussion on this issue: #11024

```diff
@@ -368,7 +368,7 @@ func (r *Reconciler) reconcileDelete(ctx context.Context, cluster *clusterv1.Clu
 	conditions.MarkTrue(m, clusterv1.PreDrainDeleteHookSucceededCondition)
 
 	// Drain node before deletion and issue a patch in order to make this operation visible to the users.
-	if r.isNodeDrainAllowed(m) {
+	if r.isNodeDrainAllowed(m) && !conditions.IsTrue(m, clusterv1.DrainingSucceededCondition) {
```
Contributor


Is this true? When might we need to perform a second drain?

Member


In my opinion we should not skip the drain if it completed once.

This would allow skipping the drain once all Pods were gone, but that is only relevant in cases where Pods are rescheduled onto the node. And if one of those rescheduled Pods has a volume, we're then stuck waiting for volume detach.

@sbueringer
Member

/hold

See comments above

@k8s-ci-robot added the label do-not-merge/hold (Indicates that a PR should not merge because someone has issued a /hold command.) on Oct 2, 2024
@enxebre
Member

enxebre commented Nov 5, 2024

> @sbueringer Sorry, I tried to reproduce the problem and checked the logs; I found the root cause is that some workloads' tolerations are:
>
> ```yaml
> tolerations:
> - operator: "Exists"
>   effect: "NoSchedule"
> ```
>
> Pods with this toleration get recreated on the node after the drain succeeds.

@liuxu623 this scenario is now covered by #11241, are you OK to close this?

@liuxu623
Author

liuxu623 commented Nov 8, 2024

> > @sbueringer Sorry, I tried to reproduce the problem and checked the logs; I found the root cause is that some workloads' tolerations are:
> >
> > ```yaml
> > tolerations:
> > - operator: "Exists"
> >   effect: "NoSchedule"
> > ```
> >
> > Pods with this toleration get recreated on the node after the drain succeeds.
>
> @liuxu623 this scenario is now covered by #11241, are you OK to close this?

ok

@liuxu623 liuxu623 closed this Nov 8, 2024
Labels
area/machine (Issues or PRs related to machine lifecycle management)
cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA.)
do-not-merge/hold (Indicates that a PR should not merge because someone has issued a /hold command.)
ok-to-test (Indicates a non-member PR verified by an org member that is safe to test.)
size/XS (Denotes a PR that changes 0-9 lines, ignoring generated files.)